
    Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for healthcare policy and decision making

    Objective: Network meta-analyses have been used extensively to compare the effectiveness of multiple interventions for healthcare policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Study design and setting: Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the accuracy of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds each: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold and accounting for the correlations between multiple test accuracy measures from the same study. Results: We developed and successfully fitted a model comparing multiple test/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whilst MMSE at threshold <25/30 appeared to have the best true negative rate. Conclusion: The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostic tests for decision making.
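The threshold constraint in this kind of model has a simple logical core: lowering the bar for a "positive" result (a higher cutoff score here) can only move more people, diseased or not, into the positive class, so both the true positive rate and the false positive rate must be non-decreasing in the cutoff. A minimal Python sketch of that ordering check, with invented accuracy figures rather than the paper's estimates:

```python
def respects_threshold_constraint(points):
    """points: (threshold, TPR, FPR) triples; both rates must rise with the cutoff."""
    ordered = sorted(points)
    return all(lo[1] <= hi[1] and lo[2] <= hi[2]
               for lo, hi in zip(ordered, ordered[1:]))

# Hypothetical accuracy pairs for a test at cutoffs <25/30 and <27/30:
mmse = [(25, 0.60, 0.10), (27, 0.80, 0.25)]
print(respects_threshold_constraint(mmse))  # True: both rates increase with the cutoff
```

In the paper's model the constraint is imposed on the accuracy parameters during MCMC estimation, rather than checked after the fact as above.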

    Evidence Synthesis for Decision Making 6: Embedding Evidence Synthesis in Probabilistic Cost-effectiveness Analysis

    When multiple parameters are estimated from the same synthesis model, it is likely that correlations will be induced between them. Network meta-analysis (mixed treatment comparisons) is one example where such correlations occur, along with meta-regression and syntheses involving multiple related outcomes. These correlations may affect the uncertainty in incremental net benefit when treatment options are compared in a probabilistic decision model, and it is therefore essential that methods are adopted that propagate the joint parameter uncertainty, including correlation structure, through the cost-effectiveness model. This tutorial paper sets out 4 generic approaches to evidence synthesis that are compatible with probabilistic cost-effectiveness analysis. The first is evidence synthesis by Bayesian posterior estimation and posterior sampling where other parameters of the cost-effectiveness model can be incorporated into the same software platform. Bayesian Markov chain Monte Carlo simulation methods with WinBUGS software are the most popular choice for this option. A second possibility is to conduct evidence synthesis by Bayesian posterior estimation and then export the posterior samples to another package where other parameters are generated and the cost-effectiveness model is evaluated. Frequentist methods of parameter estimation followed by forward Monte Carlo simulation from the maximum likelihood estimates and their variance-covariance matrix represent a third approach. A fourth option is bootstrap resampling, a frequentist simulation approach to parameter uncertainty. This tutorial paper also provides guidance on how to identify situations in which no correlations exist and therefore simpler approaches can be adopted. Software suitable for transferring data between different packages, and software that provides a user-friendly interface for integrated software platforms, offering investigators a flexible way of examining alternative scenarios, are reviewed.
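The practical point about correlations can be seen in a few lines of the third approach (forward Monte Carlo from point estimates and a variance-covariance matrix). The numbers below are invented: two treatment effects with equal variances, where the incremental comparison depends on their difference, so any shared uncertainty partly cancels.

```python
import math
import random
import statistics

random.seed(1)

def draw_bivariate_normal(mean, sd, rho):
    """One draw from a correlated bivariate normal via a 2x2 Cholesky factor."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mean[0] + sd[0] * z1
    y = mean[1] + sd[1] * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    return x, y

def inb_sd(rho, n=20000):
    """Monte Carlo SD of an incremental quantity that depends on d1 - d2."""
    diffs = [d1 - d2
             for d1, d2 in (draw_bivariate_normal((0.5, 0.3), (0.2, 0.2), rho)
                            for _ in range(n))]
    return statistics.stdev(diffs)

print(round(inb_sd(rho=0.0), 2))  # near sqrt(0.04 + 0.04) when effects are independent
print(round(inb_sd(rho=0.8), 2))  # smaller: shared uncertainty cancels in the difference
```

With positively correlated effects, ignoring the correlation overstates the uncertainty in the incremental comparison; with negative correlation it understates it, which is why the correlation structure must be propagated.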

    Evidence Synthesis for Decision Making 5: The Baseline Natural History Model

    Most cost-effectiveness analyses consist of a baseline model that represents the absolute natural history under a standard treatment in a comparator set and a model for relative treatment effects. We review synthesis issues that arise in the construction of the baseline natural history model. We cover both the absolute response to treatment on the outcome measures on which comparative effectiveness is defined and the other elements of the natural history model, usually “downstream” of the shorter-term effects reported in trials. We recommend that the same framework be used to model the absolute effects of a “standard treatment” or placebo comparator as that used for synthesis of relative treatment effects, and that the baseline model be constructed independently from the model for relative treatment effects, to ensure that the latter are not affected by assumptions made about the baseline. However, simultaneous modeling of baseline and treatment effects could have some advantages when evidence is very sparse or when other research or study designs give strong reasons for believing in a particular baseline model. The predictive distribution, rather than the fixed effect or random effects mean, should be used to represent the baseline to reflect the observed variation in baseline rates. Joint modeling of multiple baseline outcomes based on data from trials or combinations of trial and observational data is recommended where possible, as this is likely to make better use of available evidence, produce more robust results, and ensure that the model is internally coherent.
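The distinction between the random-effects mean and the predictive distribution reduces to one extra variance term: the predictive SD for the baseline in a new setting adds the between-study variance to the uncertainty in the mean. A sketch with invented numbers on the log-rate scale:

```python
import math

# Random-effects summary for a baseline event rate (numbers invented):
mu, se_mu, tau = -2.0, 0.10, 0.30   # pooled mean log-rate, its SE, between-study SD

sd_mean = se_mu                             # uncertainty in the mean baseline only
sd_pred = math.sqrt(se_mu ** 2 + tau ** 2)  # predictive SD for a *new* setting

print(round(sd_mean, 3), round(sd_pred, 3))
```

Using `sd_mean` alone would understate how much the baseline rate can plausibly vary in the population the decision applies to, which is the abstract's argument for the predictive distribution.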

    Evidence Synthesis for Decision Making 2: A Generalized Linear Modeling Framework for Pairwise and Network Meta-analysis of Randomized Controlled Trials

    We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
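The tutorial's own code is WinBUGS, but the logit-link fixed-effect case can be sketched in plain Python as inverse-variance pooling of log odds ratios, a classical two-stage shortcut rather than a reimplementation of the one-stage Bayesian model. Trial counts below are invented:

```python
import math

def log_or(r1, n1, r0, n0):
    """Log odds ratio and its variance for one two-arm trial (events, size per arm)."""
    lor = math.log(r1 / (n1 - r1)) - math.log(r0 / (n0 - r0))
    var = 1 / r1 + 1 / (n1 - r1) + 1 / r0 + 1 / (n0 - r0)
    return lor, var

def fixed_effect_pool(trials):
    """Inverse-variance pooled log odds ratio across trials."""
    pairs = [log_or(*t) for t in trials]
    weights = [1 / v for _, v in pairs]
    return sum(w * l for w, (l, _) in zip(weights, pairs)) / sum(weights)

trials = [(15, 100, 25, 100), (10, 80, 18, 80)]  # invented (events, n) per arm
print(round(fixed_effect_pool(trials), 3))  # negative: fewer events on treatment
```

Swapping the logit link for identity, log, complementary log-log, or probit changes the scale on which effects are pooled, which is exactly the flexibility the GLM framing buys.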

    Evidence Synthesis for Decision Making 1: Introduction

    We introduce the series of 7 tutorial papers on evidence synthesis methods for decision making, based on the Technical Support Documents in Evidence Synthesis prepared for the National Institute for Health and Clinical Excellence (NICE) Decision Support Unit. Although oriented to NICE’s Technology Appraisal process, which examines new pharmaceutical products in a cost-effectiveness framework, the methods presented throughout the tutorials are equally relevant to clinical guideline development and to comparisons between medical devices or public health interventions. Detailed guidance is given on how to use the other tutorials in the series, which propose a single evidence synthesis framework that covers fixed and random effects models, pairwise meta-analysis, indirect comparisons, and network meta-analysis, and where outcomes expressed in several different reporting formats can be analyzed without recourse to normal approximations. We describe the principles of evidence synthesis required by the 2008 revision of the NICE Guide to the Methods of Technology Appraisal and explain how the approach proposed in these tutorials was designed to conform to those requirements. We finish with some suggestions on how to present the evidence, the synthesis methods, and the results.

    Evidence Synthesis for Decision Making 3: Heterogeneity: Subgroups, Meta-Regression, Bias, and Bias-Adjustment

    In meta-analysis, between-study heterogeneity indicates the presence of effect-modifiers and has implications for the interpretation of results in cost-effectiveness analysis and decision making. A distinction is usually made between true variability in treatment effects due to variation in patient populations or settings and biases related to the way in which trials were conducted. Variability in relative treatment effects threatens the external validity of trial evidence and limits the ability to generalize from the results; imperfections in trial conduct represent threats to internal validity. We provide guidance on methods for meta-regression and bias-adjustment, in pairwise and network meta-analysis (including indirect comparisons), using illustrative examples. We argue that the predictive distribution of a treatment effect in a “new” trial may, in many cases, be more relevant to decision making than the distribution of the mean effect. Investigators should consider the relative contribution of true variability and random variation due to biases when considering their response to heterogeneity. In network meta-analyses, various types of meta-regression models are possible when trial-level effect-modifying covariates are present or suspected. We argue that a model with a single interaction term is the one most likely to be useful in a decision-making context. Illustrative examples of Bayesian meta-regression against a continuous covariate and meta-regression against “baseline” risk are provided. Annotated WinBUGS code is set out in an appendix.
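Stripped of its Bayesian machinery, meta-regression against a continuous covariate is a weighted regression of study effects on a trial-level covariate with inverse-variance weights. A fixed-effect sketch with invented data (the tutorial's version adds priors and a between-study variance component):

```python
def wls_fit(x, y, v):
    """Fixed-effect meta-regression: weighted least squares with weights 1/v_i."""
    w = [1 / vi for vi in v]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    beta = (sum(wi * (xi - xbar) * (yi - ybar)
                for wi, xi, yi in zip(w, x, y))
            / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    alpha = ybar - beta * xbar   # intercept: predicted effect at covariate value 0
    return alpha, beta

x = [0.0, 1.0, 2.0, 3.0]        # trial-level covariate (e.g. centred baseline risk)
y = [-0.2, -0.4, -0.55, -0.8]   # observed treatment effects (invented)
v = [0.04, 0.04, 0.09, 0.09]    # within-study variances (invented)
alpha, beta = wls_fit(x, y, v)
print(round(alpha, 3), round(beta, 3))
```

A nonzero slope is evidence that the covariate is an effect modifier; a single interaction term of this kind is the model the tutorial argues is most useful for decision making.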

    Baseline morphine consumption may explain between-study heterogeneity in meta-analyses of adjuvant analgesics and improve precision and accuracy of effect estimates

    BACKGROUND: Statistical heterogeneity can increase the uncertainty of results and reduce the quality of evidence derived from systematic reviews. At present, it is uncertain what the major factors are that account for heterogeneity in meta-analyses of analgesic adjuncts. Therefore, the aim of this review was to identify whether various covariates could explain statistical heterogeneity and use this to improve accuracy when reporting the efficacy of analgesics. METHODS: We searched for reviews using MEDLINE, EMBASE, CINAHL, AMED, and the Cochrane Database of Systematic Reviews. First, we identified the existence of considerable statistical heterogeneity (I2 > 75%). Second, we conducted meta-regression analysis for the outcome of 24-hour morphine consumption using baseline risk (control group morphine consumption) and other clinical and methodological covariates. Finally, we constructed a league table of adjuvant analgesics using a novel method of reporting effect estimates assuming a fixed consumption of 50 mg postoperative morphine. RESULTS: We included 344 randomized controlled trials with 28,130 participants. Ninety-one percent of analyses showed considerable statistical heterogeneity. Baseline risk was a significant cause of between-study heterogeneity for acetaminophen, nonsteroidal anti-inflammatory drugs and cyclooxygenase-2 inhibitors, tramadol, ketamine, α2-agonists, gabapentin, pregabalin, lidocaine, magnesium, and dexamethasone (R2 = 21%-100%; P < .05). No adjuvant demonstrated a large clinically significant reduction in morphine consumption (>10 mg). We could not exclude a moderate clinically significant effect with ketamine. Dexamethasone demonstrated a small clinical benefit (>5 mg). CONCLUSIONS: We empirically identified baseline morphine consumption as the major source of heterogeneity in meta-analyses of adjuvant analgesics across all surgical interventions.
    Controlling for baseline morphine consumption, clinicians can use audit data to estimate the morphine-reducing effect of adding any adjuvant for their local population, regardless of the type of surgery. Moreover, we have used these findings to present a novel method of reporting and an amended method of graphically displaying effect estimates, which together reduce confounding from variable baseline risk in the included trials and allow adjustment for other clinical and methodological confounding variables. We recommend the use of these methods in clinical practice and in future reviews of analgesics for postoperative pain.
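The league-table idea is that, once the meta-regression on baseline risk is fitted, each adjuvant's effect is reported at a common reference baseline of 50 mg rather than at each analysis's own mean baseline. A sketch with placeholder coefficients (not estimates from the review):

```python
# Placeholder regression coefficients for two hypothetical adjuvants
# (intercept alpha in mg, slope beta per mg of baseline consumption);
# these are NOT estimates from the review.

def effect_at_baseline(alpha, beta, baseline_mg=50.0):
    """Predicted 24-hour morphine reduction at a fixed reference baseline."""
    return alpha + beta * baseline_mg

print(effect_at_baseline(alpha=-2.0, beta=-0.10))  # both adjuvants now comparable
print(effect_at_baseline(alpha=-1.0, beta=-0.15))  # on the same footing
```

Evaluating every adjuvant at the same baseline removes the confounding that arises when trials with heavier baseline opioid use mechanically show larger absolute sparing effects.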

    Evidence Synthesis for Decision Making 7: A Reviewer's Checklist

    This checklist is for the review of evidence syntheses for treatment efficacy used in decision making based on either efficacy or cost-effectiveness. It is intended to be used for pairwise meta-analysis, indirect comparisons, and network meta-analysis, without distinction. It does not generate a quality rating and is not prescriptive. Instead, it focuses on a series of questions aimed at revealing the assumptions that the authors of the synthesis are expecting readers to accept, the adequacy of the arguments authors advance in support of their position, and the need for further analyses or sensitivity analyses. The checklist is intended primarily for those who review evidence syntheses, including indirect comparisons and network meta-analyses, in the context of decision making but will also be of value to those submitting syntheses for review, whether to decision-making bodies or journals. The checklist has 4 main headings: A) definition of the decision problem, B) methods of analysis and presentation of results, C) issues specific to network synthesis, and D) embedding the synthesis in a probabilistic cost-effectiveness model. The headings and implicit advice follow directly from the other tutorials in this series. A simple table is provided that could serve as a pro forma checklist.

    Evidence Synthesis for Decision Making 4: Inconsistency in Networks of Evidence Based on Randomized Controlled Trials

    Inconsistency can be thought of as a conflict between “direct” evidence on a comparison between treatments B and C and “indirect” evidence gained from AC and AB trials. Like heterogeneity, inconsistency is caused by effect modifiers, and specifically by an imbalance in the distribution of effect modifiers in the direct and indirect evidence. We define inconsistency as a property of loops of evidence, describe its relation to heterogeneity, and discuss the difficulties created by multiarm trials. We set out an approach to assessing consistency in 3-treatment triangular networks and in larger circuit structures, its extension to certain special structures in which independent tests for inconsistencies can be created, and methods suitable for more complex networks. Sample WinBUGS code is given in an appendix. The steps that can be taken to minimize the risk of drawing incorrect conclusions from indirect comparisons and network meta-analysis are the same steps that will minimize heterogeneity in pairwise meta-analysis. Empirical indicators that can provide reassurance and the question of how to respond to inconsistency are also discussed.
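For a single ABC loop, the direct-versus-indirect comparison can be sketched Bucher-style: the indirect B-vs-C estimate is the difference of the AC and AB effects, with their variances adding, and conflict with the direct estimate can be expressed as a z statistic. All effect sizes and variances below are invented:

```python
import math

def indirect(d_ac, v_ac, d_ab, v_ab):
    """Indirect B-vs-C estimate and variance from AC and AB evidence."""
    return d_ac - d_ab, v_ac + v_ab

def inconsistency_z(d_dir, v_dir, d_ind, v_ind):
    """z statistic for conflict between direct and indirect estimates."""
    return (d_dir - d_ind) / math.sqrt(v_dir + v_ind)

d_ind, v_ind = indirect(d_ac=-0.50, v_ac=0.04, d_ab=-0.20, v_ab=0.04)
z = inconsistency_z(d_dir=-0.10, v_dir=0.05, d_ind=d_ind, v_ind=v_ind)
print(round(d_ind, 2), round(z, 2))  # small z: no strong evidence of inconsistency
```

Note the variance of the indirect estimate is the sum of its components, which is why indirect evidence is typically much less precise than direct evidence of the same size.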